Slowness and Sparseness Lead to Place, Head-Direction, and Spatial-View Cells
We present a model for the self-organized formation of place cells, head-direction cells, and spatial-view cells in the hippocampal formation, based on unsupervised learning on quasi-natural visual stimuli. The model comprises a hierarchy of Slow Feature Analysis (SFA) nodes, which were recently shown to reproduce many properties of complex cells in the early visual system [1]. The system extracts a distributed grid-like representation of position and orientation, which is transcoded into a localized place-field, head-direction, or view representation by sparse coding. The type of cell that develops depends solely on the relevant input statistics, i.e., the movement pattern of the simulated animal. The numerical simulations are complemented by a mathematical analysis that allows us to accurately predict the output of the top SFA layer.
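The slowness principle behind the model can be illustrated with a minimal linear SFA sketch: whiten the input time series, then find the directions whose temporal derivative has the smallest variance. This toy implementation and its test signal are illustrative assumptions, not code from the paper (the model itself uses a hierarchy of nonlinear SFA nodes on visual input).

```python
import numpy as np

def linear_sfa(x, n_components=2):
    """Minimal linear Slow Feature Analysis.

    x: (T, d) time series. Returns a projection matrix W of shape
    (d, n_components) whose outputs vary as slowly as possible
    subject to unit variance and decorrelation.
    """
    x = x - x.mean(axis=0)                  # zero mean
    cov = x.T @ x / len(x)                  # input covariance
    evals, evecs = np.linalg.eigh(cov)
    S = evecs / np.sqrt(evals)              # whitening matrix
    z = x @ S                               # whitened signal, unit covariance
    dz = np.diff(z, axis=0)                 # temporal derivative (finite diff.)
    dcov = dz.T @ dz / len(dz)
    _, devecs = np.linalg.eigh(dcov)        # ascending derivative variance
    return S @ devecs[:, :n_components]     # slowest directions first

# toy demo: a slow sine hidden in a linear mixture with a fast sine
t = np.linspace(0, 4 * np.pi, 2000)
slow, fast = np.sin(t), np.sin(29 * t)
mix = np.stack([slow + 0.5 * fast, 0.5 * slow - fast], axis=1)
W = linear_sfa(mix, n_components=1)
y = (mix - mix.mean(axis=0)) @ W
corr = np.corrcoef(y[:, 0], slow)[0, 1]     # |corr| close to 1
```

In the linear, noise-free case the slowest extracted feature recovers the slow source up to sign and scale, which is why `corr` is close to ±1.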
From Grids to Places
Hafting et al. (2005) described grid cells in the dorsocaudal region of the medial entorhinal cortex (dMEC). These cells show a strikingly regular grid-like firing pattern as a function of the position of a rat in an enclosure. Since the dMEC projects to the hippocampal areas containing the well-known place cells, the question arises whether and how the localized responses of the latter can emerge based on the output of grid cells. Here, we show that, starting with simulated grid cells, a simple linear transformation maximizing sparseness leads to a localized representation similar to place fields.
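The grid-to-place step can be sketched with a toy simulation: grid-cell rates modeled as sums of three cosines with wave vectors 60 degrees apart (the standard grid model), followed by a small symmetric FastICA as one possible sparseness-maximizing linear transform. All parameters here (grid spacing, number of cells, the ICA variant) are illustrative assumptions; with a single grid module the outputs are only partially localized, whereas the paper's setting is richer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample positions in a unit square and build grid-cell rates:
# each cell sums three cosines with wave vectors 60 degrees apart,
# shifted by a random spatial offset per cell.
pos = rng.uniform(0, 1, size=(4000, 2))
angles = np.array([0.0, np.pi / 3, 2 * np.pi / 3])
ks = 2 * np.pi * 4 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
offsets = rng.uniform(0, 1, size=(6, 2))
grids = np.stack([np.cos((pos - o) @ ks.T).sum(axis=1) for o in offsets],
                 axis=1)                              # (positions, cells)

# Whiten the grid responses.
grids = grids - grids.mean(axis=0)
evals, evecs = np.linalg.eigh(grids.T @ grids / len(grids))
z = grids @ (evecs / np.sqrt(evals))

# Symmetric FastICA with the cube nonlinearity: rotate the whitened
# data to maximize non-Gaussianity, a common proxy for sparseness.
W = rng.standard_normal((6, 6))
for _ in range(200):
    y = z @ W.T
    W = (y ** 3).T @ z / len(z) - 3 * np.mean(y ** 2, axis=0)[:, None] * W
    u, _, vt = np.linalg.svd(W)
    W = u @ vt                                        # symmetric decorrelation
place_like = z @ W.T                                  # candidate localized responses
```

Because the unmixing matrix is re-orthogonalized every iteration, the output components stay decorrelated with unit variance, mirroring the constraint side of the sparse-coding step.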
Identification of High-level Object Manipulation Operations from Multimodal Input
Barchunova A, Franzius M, Pardowitz M, Ritter H. Identification of High-level Object Manipulation Operations from Multimodal Input. Presented at the IASTED International Conferences on Automation, Control, and Information Technology.
Bio-inspired visual self-localization in real world scenarios using Slow Feature Analysis.
We present a biologically motivated model for visual self-localization which extracts a spatial representation of the environment directly from high-dimensional image data by employing a single unsupervised learning rule. The resulting representation encodes the position of the camera as slowly varying features while being invariant to its orientation, resembling place cells in a rodent's hippocampus. Using an omnidirectional mirror makes it possible to manipulate the image statistics by adding simulated rotational movement for improved orientation invariance. We apply the model in indoor and outdoor experiments and, for the first time, compare its performance against two state-of-the-art visual SLAM methods. The experiments show that the proposed straightforward model enables precise self-localization with accuracies in the range of 13-33 cm, demonstrating that it is competitive with the established SLAM methods in the tested scenarios.
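The simulated-rotation trick mentioned above can be sketched simply: once an omnidirectional image is unwrapped into a 360-degree panorama, a pure camera rotation corresponds to a circular shift of the pixel columns, so training frames at arbitrary headings can be synthesized cheaply. The function name and toy image below are illustrative, not from the paper.

```python
import numpy as np

def simulate_rotation(panorama, angle_deg):
    """Circularly shift an unwrapped 360-degree panorama of shape (H, W)
    to emulate a pure camera rotation by angle_deg."""
    h, w = panorama.shape[:2]
    shift = int(round(angle_deg / 360.0 * w)) % w
    return np.roll(panorama, shift, axis=1)

pano = np.arange(12, dtype=float).reshape(3, 4)  # toy 3x4 panorama
rot = simulate_rotation(pano, 90.0)              # 90 deg = shift by 1 column
```

Feeding many such shifted copies into the learning rule makes rotation a fast-varying nuisance variable, which the slowness objective then suppresses, improving orientation invariance.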
Multimodal Segmentation of Object Manipulation Sequences with Product Models
Barchunova A, Haschke R, Franzius M, Ritter H. Multimodal Segmentation of Object Manipulation Sequences with Product Models. Presented at the International Conference on Multimodal Interaction, Alicante.